Relative End-Effector Control Using Cartesian Position Based Visual Servoing
- Authors: Wilson, W.; Williams Hulls, C.; Bell, G.
- Venue: IEEE Transactions on Robotics and Automation
- Year: 1996
- Reviewed by: Joe Kershaw
Overview
This article focuses on using a camera to improve state estimation of a robot's end-effector pose.
Overall Problem
Precise knowledge of a robot's end-effector pose is a central problem in robotics. Even with inverse dynamics, error builds up over time and causes any control scheme that relies on the system model to become inaccurate. To combat this, extra sensors are incorporated into the control algorithm to better estimate the pose of the robot. The sensing method chosen in this paper is a camera.
Solutions
In addition to the classical vision-based control of planar motion, this paper incorporates depth into the control algorithm. This is achieved by first "teaching" the robot what the target object looks like: feature points are assigned on the object, and the robot defines the target frame relative to them, similar to drive-to points in manufacturing.
To take advantage of the additional sensor information, an Extended Kalman Filter is used for the Bayesian inference. This is key to recovering depth from the visual measurements, since motion along the camera's Z-axis produces very little change in the image.
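As a rough illustration of the idea (not the paper's full six-degree-of-freedom formulation), the sketch below runs an EKF that estimates the 3-D translation of a target from the pinhole projections of several known feature points. The focal length, feature offsets, and noise levels are all made-up assumptions for the sketch.

```python
import numpy as np

F = 1.0  # focal length (assumed, normalized units)
# Feature-point offsets in the object frame (assumed geometry)
FEATURES = np.array([[0.1, 0.0, 0.0],
                     [-0.1, 0.0, 0.0],
                     [0.0, 0.1, 0.0]])

def project(t):
    """Stack the pinhole projections of every feature at translation t."""
    z = []
    for p in FEATURES:
        X, Y, Z = t + p
        z.extend([F * X / Z, F * Y / Z])
    return np.array(z)

def jacobian(t):
    """Stacked 2x3 Jacobians of each projection with respect to t."""
    rows = []
    for p in FEATURES:
        X, Y, Z = t + p
        rows.append([F / Z, 0.0, -F * X / Z**2])
        rows.append([0.0, F / Z, -F * Y / Z**2])
    return np.array(rows)

def ekf_step(x, P, z, Q, R):
    """One predict/update cycle for a (nearly) static target."""
    P = P + Q                         # random-walk prediction
    H = jacobian(x)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x = x + K @ (z - project(x))      # correct with the image innovation
    P = (np.eye(3) - K @ H) @ P
    return x, P
```

Because the features sit at different offsets, their relative image positions shrink with distance, which is what lets the filter pin down the otherwise weakly observable depth.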
Although the model relating image measurements to system position is nonlinear, the Kalman Filter (which is optimal for linear systems) can still be used: over the short distances and time steps between updates the model is approximately linear, leading to good performance.
Control is done by first computing the forward kinematics and then correcting the result with the information from the camera, so the final change in joints is a slight modification of the plain inverse of the Jacobian multiplied by the goal error.
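A minimal sketch of this correction scheme for a two-link planar arm, with made-up link lengths and gain; the `vision_correction` argument stands in for the camera-derived offset applied to the kinematic estimate.

```python
import numpy as np

L1, L2 = 1.0, 1.0  # link lengths (assumed)

def fk(q):
    """Forward kinematics: end-effector (x, y) of a 2-link planar arm."""
    q1, q2 = q
    return np.array([L1 * np.cos(q1) + L2 * np.cos(q1 + q2),
                     L1 * np.sin(q1) + L2 * np.sin(q1 + q2)])

def jacobian(q):
    """Analytic Jacobian of fk with respect to the joint angles."""
    q1, q2 = q
    return np.array([[-L1 * np.sin(q1) - L2 * np.sin(q1 + q2), -L2 * np.sin(q1 + q2)],
                     [ L1 * np.cos(q1) + L2 * np.cos(q1 + q2),  L2 * np.cos(q1 + q2)]])

def servo_step(q, goal, vision_correction, gain=0.5):
    """One control step: kinematic estimate corrected by the vision offset."""
    x_est = fk(q) + vision_correction
    err = goal - x_est
    dq = np.linalg.solve(jacobian(q), gain * err)  # resolved-rate update
    return q + dq
```

With a zero correction this reduces to the pure inverse-Jacobian update; the vision term shifts the estimate so that accumulated kinematic error does not pull the arm off the goal.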
The choice of target features was key: the algorithm performed better when targeting holes in an object than when targeting its corners.
Using multiple target points on the object allowed the controller to keep working when occlusion occurred. For ease of implementation, points that were occluded were assumed not to have moved from their last observed state, but the uncertainty in the location of those points was greatly inflated.
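The occlusion strategy can be sketched in a few lines; the inflation factor is an assumed value, not one reported in the paper.

```python
import numpy as np

def feature_measurement(z_new, z_last, visible, R_nominal, inflation=100.0):
    """Return the measurement and covariance to feed the filter for one feature.

    If the feature is occluded, reuse its last observed image position but
    inflate its measurement covariance so the filter barely trusts it.
    """
    if visible:
        return z_new, R_nominal
    return z_last, inflation * R_nominal
```

Because the Kalman gain weights each measurement by the inverse of its covariance, an occluded feature then contributes almost nothing to the update while the visible features carry the estimate.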
Results
This method produced pose estimates up to three times more accurate than those obtained from traditional kinematics alone.
During control experiments the algorithm closely followed a proposed path, but with a delay of up to 200 ms; this was noted as an area for improvement. However, the robustness of the algorithm was demonstrated when experiments with occluded points were run and it produced similar results.
Additional comments
One of the large issues in this paper was a lack of computing power: the researchers struggled to reach the update rates required by the PD controller. This is becoming increasingly less problematic as computing power continues to grow, and these calculations are relatively trivial in the modern day.
Additionally, modern robotics can use depth cameras to obtain direct depth measurements. This is covered well in "Depth Camera Based Collision Avoidance via Active Robot Control" from 2014, which uses the Kinect camera, as do many other image-based tracking systems.
© Hasan Poonawala. Last modified: March 18, 2021. Website built with Franklin.jl and the Julia programming language.